Since the introduction of ChatGPT in November 2022, generative AI (GenAI) has been described as everything from a novelty to an economic boon to a threat to humanity. As this debate continued, GenAI took center stage at the RSA Conference 2023 with the introduction of, and subsequent hoopla around, Microsoft Security Copilot, and many other vendors have introduced similar capabilities since. Few would argue against the idea that GenAI (and AI in general) will have a profound impact on society and global economics, but in the near term it introduces new risks as employees connect to GenAI applications, share data, and build homegrown large language models (LLMs) of their own. These actions will inevitably expand the attack surface, open new threat vectors, introduce software vulnerabilities, and lead to data leakage.
Despite these risks, generative AI holds great cybersecurity potential: it could help improve security team productivity, accelerate threat detection, automate remediation actions, and guide incident response. These prospective benefits are so compelling that many CISOs are already experimenting with GenAI or building their own security LLMs. At the same time, security professionals remain anxious about how cybercriminals may use GenAI as part of attack campaigns and how they can defend against these advances.
Have organizations embraced GenAI for cybersecurity today, and what will they do in the future? To gain further insight into these trends, TechTarget’s Enterprise Strategy Group surveyed 370 IT and cybersecurity professionals at organizations in North America (US and Canada) who are responsible for cyber-risk management, threat intelligence analysis, and security operations, and who have visibility into current GenAI usage and strategic plans.